    Polymerase-endonuclease amplification reaction for large-scale enzymatic production of antisense oligonucleotide

    Synthetic oligonucleotides are contaminated with highly homologous failure sequences. Oligonucleotide synthesis is difficult to scale up because it requires expensive equipment, hazardous chemicals, and a tedious purification process. Here we report a novel thermocyclic reaction, the polymerase-endonuclease amplification reaction (PEAR), for the amplification of oligonucleotides. A target oligonucleotide and a tandemly repeated antisense probe are subjected to repeated cycles of denaturing, annealing, elongation, and cleaving, in which thermostable DNA polymerase elongation and strand slipping generate duplex tandem repeats, and thermostable endonuclease (PspGI) cleavage releases monomeric duplex oligonucleotides. Each round of PEAR achieves >100-fold amplification. The product can be used directly in a further round of PEAR, and the process can be repeated. Because it avoids hazardous materials, improves product purity, is easy to scale up, and is amenable to full automation, PEAR has the potential to be a useful tool for large-scale production of antisense oligonucleotide drugs.
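    The compounding effect of repeated rounds can be sketched as follows. This is a minimal illustrative calculation, assuming a conservative 100-fold gain per round (the abstract reports >100-fold); the function name and starting quantity are hypothetical.

    ```python
    # Hedged sketch: cumulative yield over repeated PEAR rounds, assuming a
    # flat 100-fold amplification per round (the abstract reports >100-fold,
    # so this is a lower bound, not the paper's measured value).
    def pear_yield(start_molecules: float, rounds: int, fold_per_round: float = 100.0) -> float:
        """Return the molecule count after `rounds` of PEAR amplification."""
        return start_molecules * fold_per_round ** rounds

    # Three successive rounds compound to at least a 10^6-fold gain:
    print(pear_yield(1.0, 3))  # 1000000.0
    ```

    The point of the sketch is that because each round's product feeds the next round directly, total yield grows geometrically in the number of rounds rather than linearly.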

    Unsupervised Learning of Visual Representations using Videos

    Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNNs. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision: two patches connected by a track should have similar representations in deep feature space, since they probably belong to the same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, using only 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding-box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.
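    The ranking objective described above can be sketched as a standard triplet loss. This is a minimal NumPy illustration, not the paper's implementation; the margin value and toy feature vectors are assumptions for demonstration.

    ```python
    import numpy as np

    # Hedged sketch of the triplet ranking idea: two patches from the same
    # track (anchor, positive) should be closer in feature space than a patch
    # from a different track (negative). Margin is illustrative, not from the paper.
    def triplet_ranking_loss(anchor, positive, negative, margin=1.0):
        d_pos = np.linalg.norm(anchor - positive)  # distance to tracked patch
        d_neg = np.linalg.norm(anchor - negative)  # distance to unrelated patch
        return max(0.0, margin + d_pos - d_neg)   # penalize only margin violations

    a = np.array([1.0, 0.0])   # anchor patch feature
    p = np.array([1.1, 0.0])   # positive: same track, nearby in feature space
    n = np.array([-1.0, 0.0])  # negative: different track, far away
    print(triplet_ranking_loss(a, p, n))  # 0.0 (negative already beyond the margin)
    ```

    Minimizing this loss over many tracked-patch triplets pushes same-object patches together and different-object patches apart, which is the supervision signal the tracker provides for free.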